Generative Poisoning Attack Method Against Neural Networks

Authors

  • Chaofei Yang
  • Qing Wu
  • Hai Li
  • Yiran Chen
Abstract

Poisoning attacks are recognized as a severe security threat to machine learning algorithms. In many applications, for example, deep neural network (DNN) models collect public data as inputs for re-training, and this input data can be poisoned. Although poisoning attacks against support vector machines (SVMs) have been extensively studied, very little is known about how such attacks can be mounted on neural networks (NNs), especially DNNs. In this work, we first examine the possibility of applying a traditional gradient-based method (referred to as the direct gradient method) to generate poisoned data against NNs by leveraging the gradient of the target model w.r.t. the normal data. We then propose a generative method to accelerate the generation of poisoned data: an auto-encoder (the generator) produces poisoned data and is updated by a reward function of the loss, while the target NN model (the discriminator) receives the poisoned data and calculates its loss w.r.t. the normal data. Our experimental results show that the generative method can speed up poisoned-data generation by up to 239.38× compared with the direct gradient method, at the cost of slightly lower model-accuracy degradation. We also design a countermeasure that detects such poisoning attacks by monitoring the loss of the target model.
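To make the two generation strategies and the countermeasure concrete, here is a minimal PyTorch-style sketch. It assumes a classification target model and cross-entropy loss; all names (direct_gradient_poison, PoisonGenerator, generative_poison_step, looks_poisoned), step sizes, and architectures are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def direct_gradient_poison(target_model, x, y, step=0.1, iters=20):
    """Direct gradient method (sketch): repeatedly perturb a normal sample x
    along the gradient that increases the target model's loss on it."""
    x_p = x.clone().detach().requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(target_model(x_p), y)
        grad, = torch.autograd.grad(loss, x_p)
        # Ascend the loss surface; clamp so the sample stays a valid input.
        x_p = (x_p + step * grad.sign()).clamp(0, 1).detach().requires_grad_(True)
    return x_p.detach()

class PoisonGenerator(nn.Module):
    """Illustrative auto-encoder generator: maps a normal sample to a
    poisoned variant in a single forward pass."""
    def __init__(self, dim=784, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def generative_poison_step(generator, gen_opt, target_model, x, y):
    """Generative method (sketch): the target model plays the role of a
    'discriminator'; the loss it suffers on the generated sample is the
    generator's reward, so the generator steps to maximize that loss."""
    x_p = generator(x)
    reward = F.cross_entropy(target_model(x_p), y)  # loss w.r.t. normal labels
    gen_opt.zero_grad()
    (-reward).backward()  # minimizing the negated reward ascends the loss
    gen_opt.step()
    return x_p.detach()

def looks_poisoned(target_model, x, y, threshold):
    """Countermeasure (sketch): flag incoming re-training data whose loss
    under the current target model is abnormally high."""
    with torch.no_grad():
        return F.cross_entropy(target_model(x), y).item() > threshold
```

The speedup reported in the abstract comes from amortization: once trained, the generator emits a poisoned sample in a single forward pass, whereas the direct gradient method runs many forward/backward passes per sample. The countermeasure exploits the same signal the attack relies on, since effective poisoned data must drive the target model's loss up.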


Similar Articles

Defense-gan: Protecting Classifiers against Adversarial Attacks Using Generative Models

In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend de...


Are Generative Classifiers More Robust to Adversarial Attacks?

There is a rising interest in studying the robustness of deep neural network classifiers against adversaries, with both advanced attack and defence techniques being actively developed. However, most recent work focuses on discriminative classifiers, which only model the conditional distribution of the labels given the inputs. In this abstract we propose a deep Bayes classifier that improves the c...


Securing AODV routing protocol against the black hole attack using Firefly algorithm

Mobile ad hoc networks are networks of wireless devices that self-organize into a network. They are designed as a new generation of computer networks to satisfy specific requirements, with features different from wired networks. These networks have no fixed communication infrastructure, and for communication with other nodes the intermediate no...


Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition

In this paper we show that misclassification attacks against face-recognition systems based on deep neural networks (DNNs) are more dangerous than previously demonstrated, even in contexts where the adversary can manipulate only her physical appearance (versus directly manipulating the image input to the DNN). Specifically, we show how to create eyeglasses that, when worn, can succeed in target...


Spectrum Sensing Data Falsification Attack in Cognitive Radio Networks: An Analytical Model for Evaluation and Mitigation of Performance Degradation

Cognitive Radio (CR) networks enable dynamic spectrum access and can significantly improve spectral efficiency. Cooperative Spectrum Sensing (CSS) exploits the spatial diversity between CR users to increase sensing accuracy. However, in a realistic scenario, the trustworthiness of CSS is vulnerable to the Spectrum Sensing Data Falsification (SSDF) attack. In an SSDF attack, some malicious CR users deli...



Journal:
  • CoRR

Volume abs/1703.01340  Issue 

Pages  -

Publication date 2017